
Add toProxyObjectLambda method to Parser #119

Open · wants to merge 1 commit into main

Conversation

defunctzombie (Contributor)

A proxy object allows for "lazy" deserialization. Rather than having to de-serialize the entire message, only the specific field is de-serialized.
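As a minimal sketch of the idea (hypothetical names and toy readers, not the PR's actual implementation): a Proxy whose `get` trap decodes a single field on demand, so fields that are never read are never deserialized.

```javascript
// Count how many field reads actually decode data, to show laziness.
let decodeCount = 0;

// Hypothetical per-field readers; in the real Parser these would decode
// bytes out of the flatbuffer table.
const fieldNameToReadFn = new Map([
  ["x", (table) => { decodeCount++; return table.bytes[0]; }],
  ["y", (table) => { decodeCount++; return table.bytes[1]; }],
]);

function toProxyObject(table) {
  return new Proxy({}, {
    get(_target, property) {
      // Only the requested field is deserialized; untouched fields stay
      // encoded in the underlying buffer.
      return fieldNameToReadFn.get(property)?.(table);
    },
  });
}

const message = toProxyObject({ bytes: new Uint8Array([7, 9]) });
console.log(message.x);   // 7 -- only "x" was decoded
console.log(decodeCount); // 1 -- "y" was never touched
```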

This implementation does not "cache" already de-serialized data, though that could be an option. In theory, a user could build this same method themselves using methods already exposed on the Parser, but it seems nice to provide it as a tested approach that ships with the library.

In our application we noticed a significant performance improvement for workflows which did not need to access all of the fields of a message.

I have not benchmarked in isolation the runtime tradeoff when you know you need to deserialize the entire message, but some of our higher-level benchmarks were flat (i.e. no regression), which I generally view as a good sign.

The other tradeoff, which I've noted in the function comment, is the difference in memory profile between the two variants. Having data fully deserialized in JS heap space vs. still in the typedarray is a big difference for some applications but might not be appropriate for others (depending on how the Tables are created).

defunctzombie (Contributor, Author)

cc @jkuszmaul @jtbandes for discussion

jkuszmaul (Owner) left a comment

Conceptually happy with this; future improvements might make it more recursive to allow deeper field access, but that might also have other negative performance impacts.

// This tends to correspond with the order in the original .fbs (unless ids were specified manually).
fieldLambdas.sort((a, b) => a.id - b.id);

// Loopup the function to read a field using the field name
jkuszmaul (Owner)
lookup? Or does loopup mean something?


// Sort fields by ID so the resulting object is built with fields in this order.
// This tends to correspond with the order in the original .fbs (unless ids were specified manually).
fieldLambdas.sort((a, b) => a.id - b.id);
jkuszmaul (Owner)
Feels like there's some way we should be able to deduplicate the error-checking and field sorting between this and toObjectLambda?

{},
{
  get(_target, property) {
    return fieldNameToReadFn.get(property)?.(t);
jkuszmaul (Owner)
To make sure I am not missing something: This just defers the deserialization of top-level fields in an object? If you have a highly-nested object, as soon as you want to access one sub-field anywhere inside of one of the top-level fields you have to deserialize the entirety of that top-level field (i.e., this is not recursive)?

Should probably note that in the function comment.
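A hypothetical illustration of this point (toy names, not the PR's code): the laziness stops at the top level, so reading any sub-field forces the entire top-level field to be deserialized in one shot.

```javascript
// Count how many times the "pose" field is fully decoded.
let poseDecodes = 0;

const readFns = new Map([
  ["pose", () => {
    poseDecodes++;
    // Decoding "pose" materializes all of its sub-fields at once.
    return { position: { x: 1, y: 2, z: 3 }, orientation: { w: 1 } };
  }],
]);

const lazyMsg = new Proxy({}, {
  get(_target, property) {
    return readFns.get(property)?.();
  },
});

// Reading one sub-field still decodes the whole "pose" field:
const x = lazyMsg.pose.position.x;
console.log(x, poseDecodes); // 1 1
```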

getOwnPropertyDescriptor(_target, _property) {
  return {
    writable: false,
    configurable: true,
jkuszmaul (Owner)
Is configurable: true correct? I'm not familiar with the subtleties of this, but it seems overly permissive (although presumably not a problem) based on a superficial reading of https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/getOwnPropertyDescriptor
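One possible explanation (an assumption; the PR doesn't state it): the Proxy invariants forbid a `getOwnPropertyDescriptor` trap from reporting a property as non-configurable unless it exists as a non-configurable own property of the target. Since the target here is an empty `{}`, returning `configurable: false` would throw a TypeError at access time.

```javascript
// Reporting configurable: false for a property that does not exist on the
// (empty) proxy target violates a Proxy invariant and throws.
const strict = new Proxy({}, {
  getOwnPropertyDescriptor() {
    return { value: 42, writable: false, enumerable: true, configurable: false };
  },
});
let threw = false;
try {
  Object.getOwnPropertyDescriptor(strict, "anything");
} catch (err) {
  threw = err instanceof TypeError;
}
console.log(threw); // true -- invariant violation

// With configurable: true the same trap is allowed.
const permissive = new Proxy({}, {
  getOwnPropertyDescriptor() {
    return { value: 42, writable: false, enumerable: true, configurable: true };
  },
});
const desc = Object.getOwnPropertyDescriptor(permissive, "anything");
console.log(desc.value); // 42
```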

jtbandes (Collaborator)

@defunctzombie just wondering if you're planning to clean this up & get it merged?

defunctzombie (Contributor, Author)

> @defunctzombie just wondering if you're planning to clean this up & get it merged?

Yes - though not gonna happen before the new year given other priorities.
